循环网络搭建

基础结构

RNN基础单元:

(图:RNN基础单元结构,原图 1594893699477.png)

在RNN基础单元中,隐藏层状态由输入层和上一时刻的状态决定,即隐藏层的计算是$X_tW_{xh} + H_{t-1}W_{hh}$ ,而这个计算可以通过将$X_t$与$H_{t-1}$连结后的矩阵与$W_{xh}$与$W_{hh}$连结后的矩阵相乘得到。

import torch

X, W_xh = torch.randn(3, 1), torch.randn(1, 4)
H, W_hh = torch.randn(3, 4), torch.randn(4, 4)
# 先乘后加
torch.matmul(X, W_xh) + torch.matmul(H, W_hh)
# 先连结后乘
torch.matmul(torch.cat((X, H), dim=1), torch.cat((W_xh, W_hh), dim=0))

这两种方法得到的结果都是一样的。
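
可以用 torch.allclose 验证这两种计算方式在数值误差范围内一致(一个最小示例,沿用上面定义的张量):

res1 = torch.matmul(X, W_xh) + torch.matmul(H, W_hh)                           # 先乘后加
res2 = torch.matmul(torch.cat((X, H), dim=1), torch.cat((W_xh, W_hh), dim=0))  # 先连结后乘
print(torch.allclose(res1, res2))  # True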

梯度计算

以时间步数为3的RNN计算图为例:

(图:时间步数为3的循环神经网络计算依赖图,原图 1594721085621.png)

其前向计算公式为(为简化推导,设激活函数为恒等映射并忽略偏置):

$$h_t = W_{hx} x_t + W_{hh} h_{t-1}, \qquad o_t = W_{qh} h_t, \qquad L = \frac{1}{T}\sum_{t=1}^{T} \ell(o_t, y_t)$$

从$L$开始倒推梯度,首先是各时间步输出$o_t$的梯度:

$$\frac{\partial L}{\partial o_t} = \frac{\partial \ell(o_t, y_t)}{T \cdot \partial o_t}$$

输出层参数$W_{qh}$的梯度(求和是因为各时间步共享参数,乘积来自链式法则):

$$\frac{\partial L}{\partial W_{qh}} = \sum_{t=1}^{T} \frac{\partial L}{\partial o_t}\, h_t^\top$$

在计算隐藏层参数$W_{hh}$的梯度时,各时间步通过隐藏状态$h_t$产生了依赖关系,因此先计算最终时间步$T$的$h_T$的梯度:

$$\frac{\partial L}{\partial h_T} = W_{qh}^\top \frac{\partial L}{\partial o_T}$$

而对于$t < T$的情况,需要从后往前计算,这样在计算每个$h_t$的梯度时,后面依赖它的$h$的梯度都已经算出。根据链式法则,$h_t$的梯度包含$o_t$和$h_{t+1}$对$h_t$的偏导,于是得到一个关于$h_t$的递推式:

$$\frac{\partial L}{\partial h_t} = W_{hh}^\top \frac{\partial L}{\partial h_{t+1}} + W_{qh}^\top \frac{\partial L}{\partial o_t}$$

将递推式展开可得:

$$\frac{\partial L}{\partial h_t} = \sum_{i=t}^{T} \left(W_{hh}^\top\right)^{T-i} W_{qh}^\top \frac{\partial L}{\partial o_{T+t-i}}$$

由上式中的指数项可见,当$T$与$t$之差较大时,容易出现梯度衰减或梯度爆炸。而$h_t$的梯度也会影响其他参数的梯度,比如$W_{hh}$和$W_{hx}$:它们在各时间步间共享,$h_1, \cdots, h_T$都依赖它们,因此:

$$\frac{\partial L}{\partial W_{hx}} = \sum_{t=1}^{T} \frac{\partial L}{\partial h_t}\, x_t^\top, \qquad \frac{\partial L}{\partial W_{hh}} = \sum_{t=1}^{T} \frac{\partial L}{\partial h_t}\, h_{t-1}^\top$$
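
下面用一个极简的数值示例直观感受这个指数项的影响(仅作示意,用两个特征值大小不同的矩阵代替$W_{hh}^\top$,反复相乘并观察范数的变化):

import torch

# 特征值大于1时矩阵的幂指数级增大,小于1时指数级衰减,
# 对应梯度中 (W_hh^T)^(T-t) 这一项导致的梯度爆炸和梯度衰减
W_big = 1.2 * torch.eye(4)    # 特征值为1.2
W_small = 0.8 * torch.eye(4)  # 特征值为0.8
for name, M in [('特征值>1', W_big), ('特征值<1', W_small)]:
    P = torch.eye(4)
    for _ in range(30):
        P = P @ M             # 相当于矩阵的30次幂
    print(name, '30次幂的范数:', P.norm().item())  # 约475 与 约0.0025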

读取数据集

import torch
import random


with open('../input/jaychou-lyrics/jaychou_lyrics.txt') as f:
corpus_chars = f.read() # string
corpus_chars[:40] # 查看前40个字符


corpus_chars = corpus_chars.replace('\n', ' ').replace('\r', ' ')
corpus_chars = corpus_chars[0:10000]


## 建立字符索引
idx_to_char = list(set(corpus_chars)) # 先转为set集合,使得无重复字符,再转为list,方便索引访问
char_to_idx = dict([(char, i) for i, char in enumerate(idx_to_char)]) # 映射组成字典
vocab_size = len(char_to_idx)
vocab_size # 共1027个不同字符

# 提取文本索引
corpus_indices = [char_to_idx[char] for char in corpus_chars] # 用它作为语料库
sample = corpus_indices[:20]
print('chars:', ''.join([idx_to_char[idx] for idx in sample]))
print('indices:', sample)

采样时序数据

  • 随机采样

下面的代码每次从数据里随机采样一个小批量。其中批量大小batch_size指每个小批量的样本数,num_steps为每个样本所包含的时间步数(一个时间序列)。 在随机采样中,每个样本是原始序列上任意截取的一段序列。相邻的两个随机小批量在原始序列上的位置不一定相毗邻。因此,我们无法用一个小批量最终时间步的隐藏状态来初始化下一个小批量的隐藏状态。在训练模型时,每次随机采样前都需要重新初始化隐藏状态。

# 参数包含语料索引、批量大小、每个样本的序列长度
def data_iter_random(corpus_indices, batch_size, num_steps, device=None):
    # 减1是因为标签Y需要在X的基础上向后偏移一位
    num_examples = (len(corpus_indices) - 1) // num_steps  # 可采样出的样本数量
    epoch_size = num_examples // batch_size  # 一个epoch内可取出的小批量个数

    example_indices = list(range(num_examples))
    random.shuffle(example_indices)  # 对样本洗牌

    # 返回从pos开始的长为num_steps的序列,即一个样本
    def _data(pos):
        return corpus_indices[pos: pos + num_steps]

    if device is None:
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # 迭代取样本,共迭代epoch_size次,每次返回batch_size个样本组成的X和Y
    for i in range(epoch_size):
        # 每次读取batch_size个随机样本
        i = i * batch_size
        batch_indices = example_indices[i: i + batch_size]
        X = [_data(j * num_steps) for j in batch_indices]
        Y = [_data(j * num_steps + 1) for j in batch_indices]
        yield torch.tensor(X, dtype=torch.float32, device=device), torch.tensor(Y, dtype=torch.float32, device=device)
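
可以用一个简单的整数序列测试随机采样的输出(最小示例,my_seq 仅作演示,后面相邻采样的测试也沿用它):

my_seq = list(range(30))  # 0到29的连续整数序列
for X, Y in data_iter_random(my_seq, batch_size=2, num_steps=6):
    print('X: ', X, '\nY:', Y, '\n')
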
  • 相邻采样

注意相邻采样指相邻的两个小批量在原始序列上的位置毗邻。做法是先将序列按batch_size分成若干段,每个小批量从每段中按顺序各取一个长为num_steps的片段;这样同一小批量内的样本彼此并不相邻,但前后两个小批量中对应位置的样本在原序列上是相连的,因此可以用上一小批量最终时间步的隐藏状态来初始化下一小批量的隐藏状态。

# 相邻采样
def data_iter_consecutive(corpus_indices, batch_size, num_steps, device=None):
    if device is None:
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    corpus_indices = torch.tensor(corpus_indices, dtype=torch.float32, device=device)
    data_len = len(corpus_indices)
    batch_len = data_len // batch_size  # 按batch_size分割后每段的长度
    # 在每一段内数据是相邻取的,因此可直接将数据分成batch_size段再顺序取(不同段之间不相邻)
    indices = corpus_indices[0: batch_size*batch_len].view(batch_size, batch_len)  # 保证对齐
    epoch_size = (batch_len - 1) // num_steps  # 标签Y后延一位
    for i in range(epoch_size):
        i = i * num_steps
        X = indices[:, i: i + num_steps]
        Y = indices[:, i + 1: i + num_steps + 1]
        yield X, Y

for X, Y in data_iter_consecutive(my_seq, batch_size=2, num_steps=6):
    print('X: ', X, '\nY:', Y, '\n')

基础RNN

手动实现

import time
import math
import numpy as np
import torch
from torch import nn, optim
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# (corpus_indices, char_to_idx, idx_to_char, vocab_size) = load_data_jay_lyrics()

###################### 将字符按索引转化为one-hot向量 ############################
def one_hot(x, n_class, dtype=torch.float32):
    # x shape: (batch), output shape: (batch, n_class),即输出batch个n_class维向量
    x = x.long()
    res = torch.zeros(x.shape[0], n_class, dtype=dtype, device=x.device)
    # torch.gather(input, dim, index) 在dim维度上按index取值
    # torch.Tensor.scatter_(dim, index, src) 在dim维度上按index放入src
    res.scatter_(1, x.view(-1, 1), 1)
    return res

# 查看效果
x = torch.tensor([0, 2])
one_hot(x, vocab_size)  # 2 * n_class

def to_onehot(X, n_class):  # 将一个批量的序列转成num_steps个one-hot矩阵
    # X shape: (batch, seq_len), output: seq_len elements of (batch, n_class)
    return [one_hot(X[:, i], n_class) for i in range(X.shape[1])]

# 查看效果
X = torch.arange(10).view(2, 5)
inputs = to_onehot(X, vocab_size)
print(len(inputs), inputs[0].shape)  # 5个2 * 1027的矩阵组成的列表

########################### 初始化参数 ##########################
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)
        return torch.nn.Parameter(ts, requires_grad=True)

    # 隐藏层参数
    W_xh = _one((num_inputs, num_hiddens))
    W_hh = _one((num_hiddens, num_hiddens))
    b_h = torch.nn.Parameter(torch.zeros(num_hiddens, device=device, requires_grad=True))
    # 输出层参数
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, requires_grad=True))
    return nn.ParameterList([W_xh, W_hh, b_h, W_hq, b_q])

# 在随机采样中,每个小批量开始前需要将隐藏状态置零
def init_rnn_state(batch_size, num_hiddens, device):
    # 初始化隐藏状态
    return (torch.zeros((batch_size, num_hiddens), device=device), )

############# RNN计算 ###########
def rnn(inputs, state, params):
    # inputs和outputs皆为num_steps个形状为(batch_size, vocab_size)的矩阵
    W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        # 一个简单的RNN,隐藏状态与输入X和上一个时间步的隐藏状态都有关
        H = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(H, W_hh) + b_h)
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    # 返回输出的批次向量列表 和 隐藏状态
    return outputs, (H,)

# 前向预测,prefix为前缀字符串,基于它预测后面的字符
def predict_rnn(prefix, num_chars, rnn, params, init_rnn_state,
                num_hiddens, vocab_size, device, idx_to_char, char_to_idx):
    state = init_rnn_state(1, num_hiddens, device)  # batch为1
    output = [char_to_idx[prefix[0]]]
    for t in range(num_chars + len(prefix) - 1):
        # 将上一时间步的输出作为当前时间步的输入
        X = to_onehot(torch.tensor([[output[-1]]], device=device), vocab_size)
        # 计算输出和更新隐藏状态
        (Y, state) = rnn(X, state, params)
        # 下一个时间步的输入是prefix里的字符或者当前的最佳预测字符
        if t < len(prefix) - 1:
            output.append(char_to_idx[prefix[t + 1]])
        else:
            output.append(int(Y[0].argmax(dim=1).item()))
    return ''.join([idx_to_char[i] for i in output])

# 查看效果(参数为随机初始化,此时的预测结果没有实际意义)
params = get_params()
predict_rnn('分开', 10, rnn, params, init_rnn_state, num_hiddens, vocab_size,
            device, idx_to_char, char_to_idx)

####################### 模型训练 ######################
# 梯度裁剪
def grad_clipping(params, theta, device):
    # 防止梯度爆炸
    norm = torch.tensor([0.0], device=device)
    for param in params:
        norm += (param.grad.data ** 2).sum()
    norm = norm.sqrt().item()  # 所有参数梯度拼接成向量后的2范数
    if norm > theta:
        for param in params:
            param.grad.data *= (theta / norm)  # 按比例缩小梯度

def sgd(params, lr, batch_size):
    for param in params:
        param.data -= lr * param.grad / batch_size  # 更新时用.data,以免操作计入计算图

def train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,
                          vocab_size, device, corpus_indices, idx_to_char,
                          char_to_idx, is_random_iter, num_epochs, num_steps,
                          lr, clipping_theta, batch_size, pred_period,
                          pred_len, prefixes):
    if is_random_iter:
        data_iter_fn = data_iter_random
    else:
        data_iter_fn = data_iter_consecutive
    params = get_params()
    loss = nn.CrossEntropyLoss()
    ### 交叉熵损失函数由log_softmax和NLLLoss组合而成:先对输出得分计算log_softmax,再通过NLLLoss
    ### 与标签对比,目标是使正确类别的概率最大。其前两个参数是(x, y):x是预测得分,形状是(batch, vocab_size);
    ### y是标签,形状是(batch)

    for epoch in range(num_epochs):
        if not is_random_iter:  # 如使用相邻采样,在epoch开始时初始化隐藏状态
            state = init_rnn_state(batch_size, num_hiddens, device)
        l_sum, n, start = 0.0, 0, time.time()
        data_iter = data_iter_fn(corpus_indices, batch_size, num_steps, device)
        for X, Y in data_iter:
            if is_random_iter:  # 如使用随机采样,在每个小批量更新前初始化隐藏状态
                state = init_rnn_state(batch_size, num_hiddens, device)
            else:
                # 否则需要使用detach函数从计算图分离隐藏状态, 这是为了
                # 使模型参数的梯度计算只依赖一次迭代读取的小批量序列(防止梯度计算开销太大)
                for s in state:  # h
                    s.detach_()

            inputs = to_onehot(X, vocab_size)
            # outputs有num_steps个形状为(batch_size, vocab_size)的矩阵
            (outputs, state) = rnn(inputs, state, params)
            # 拼接之后形状为(num_steps * batch_size, vocab_size)
            outputs = torch.cat(outputs, dim=0)
            # Y的形状是(batch_size, num_steps),转置后再变成长度为
            # batch * num_steps 的向量,这样跟输出的行一一对应
            y = torch.transpose(Y, 0, 1).contiguous().view(-1)
            # 使用交叉熵损失计算平均分类误差
            l = loss(outputs, y.long())

            # 梯度清0
            if params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            grad_clipping(params, clipping_theta, device)  # 裁剪梯度
            sgd(params, lr, 1)  # 因为误差已经取过均值,梯度不用再做平均
            l_sum += l.item() * y.shape[0]
            n += y.shape[0]

        if (epoch + 1) % pred_period == 0:
            print('epoch %d, perplexity %f, time %.2f sec' % (
                epoch + 1, math.exp(l_sum / n), time.time() - start))
            for prefix in prefixes:
                print(' -', predict_rnn(prefix, pred_len, rnn, params, init_rnn_state,
                                        num_hiddens, vocab_size, device, idx_to_char, char_to_idx))


############ 测试训练
num_epochs, num_steps, batch_size, lr, clipping_theta = 250, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 50, 50, ['分开', '不分开']

# 随机采样;若用相邻采样,把True改为False即可
train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,
                      vocab_size, device, corpus_indices, idx_to_char,
                      char_to_idx, True, num_epochs, num_steps, lr,
                      clipping_theta, batch_size, pred_period, pred_len,
                      prefixes)
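
上面训练代码中交叉熵损失的输入形状容易混淆,这里补一个独立的小例子(仅作示意,与歌词数据无关):

# nn.CrossEntropyLoss 的输入是未归一化得分,形状(batch, num_classes);标签是类别索引,形状(batch,)
logits = torch.randn(4, 1027)            # 相当于4行输出,vocab_size=1027
labels = torch.tensor([3, 0, 15, 999])   # 对应的字符索引
print(nn.CrossEntropyLoss()(logits, labels))  # 标量:平均交叉熵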

torch函数实现

num_hiddens, num_steps, batch_size = 256, 35, 2
# rnn_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)  # 已测试
rnn_layer = nn.RNN(input_size=vocab_size, hidden_size=num_hiddens)
# torch的RNN输入数据形状是(seq_len, batch_size, vocab_size)

state = None
X = torch.rand(num_steps, batch_size, vocab_size)
Y, state_new = rnn_layer(X, state)
print(Y.shape, len(state_new), state_new[0].shape)

################ RNN模型,使用内置rnn_layer
class RNNModel(nn.Module):
    def __init__(self, rnn_layer, vocab_size):
        super(RNNModel, self).__init__()
        self.rnn = rnn_layer
        self.hidden_size = rnn_layer.hidden_size * (2 if rnn_layer.bidirectional else 1)
        self.vocab_size = vocab_size
        self.dense = nn.Linear(self.hidden_size, vocab_size)
        self.state = None

    def forward(self, inputs, state):  # inputs: (batch, seq_len)
        # 获取one-hot向量表示
        X = to_onehot(inputs, self.vocab_size)  # X是长度为seq_len的list
        Y, self.state = self.rnn(torch.stack(X), state)
        # Y.shape: (seq_len, batch, num_hiddens)

        # 先将Y的形状变成(num_steps * batch_size, num_hiddens)再输入全连接层,
        # 经过计算后的输出形状为(num_steps * batch_size, vocab_size)
        output = self.dense(Y.view(-1, Y.shape[-1]))
        return output, self.state

######## 前向预测 #############
def predict_rnn_pytorch(prefix, num_chars, model, vocab_size, device, idx_to_char,
                        char_to_idx):
    state = None
    output = [char_to_idx[prefix[0]]]  # output会记录prefix加上预测输出
    for t in range(num_chars + len(prefix) - 1):
        X = torch.tensor([output[-1]], device=device).view(1, 1)
        if state is not None:
            if isinstance(state, tuple):  # LSTM, state:(h, c)
                state = (state[0].to(device), state[1].to(device))
            else:
                state = state.to(device)

        (Y, state) = model(X, state)
        if t < len(prefix) - 1:
            output.append(char_to_idx[prefix[t + 1]])
        else:
            output.append(int(Y.argmax(dim=1).item()))
    return ''.join([idx_to_char[i] for i in output])

############################## 模型训练 #################
model = RNNModel(rnn_layer, vocab_size).to(device)
# 使用随机初始化的模型预测10个字符看看效果
predict_rnn_pytorch('分开', 10, model, vocab_size, device, idx_to_char, char_to_idx)

# 训练函数
def train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                  corpus_indices, idx_to_char, char_to_idx,
                                  num_epochs, num_steps, lr, clipping_theta,
                                  batch_size, pred_period, pred_len, prefixes):
    loss = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)
    state = None
    for epoch in range(num_epochs):
        l_sum, n, start = 0.0, 0, time.time()
        data_iter = data_iter_consecutive(corpus_indices, batch_size, num_steps, device)  # 相邻采样
        for X, Y in data_iter:
            if state is not None:
                # 使用detach函数从计算图分离隐藏状态, 这是为了
                # 使模型参数的梯度计算只依赖一次迭代读取的小批量序列(防止梯度计算开销太大)
                if isinstance(state, tuple):  # LSTM, state:(h, c)
                    state = (state[0].detach(), state[1].detach())
                else:
                    state = state.detach()

            (output, state) = model(X, state)  # output形状为(num_steps * batch_size, vocab_size)

            # Y的形状是(batch_size, num_steps),转置后再变成长度为
            # batch * num_steps 的向量,这样跟输出的行一一对应
            y = torch.transpose(Y, 0, 1).contiguous().view(-1)
            l = loss(output, y.long())

            optimizer.zero_grad()
            l.backward()
            # 梯度裁剪;先转成list,避免生成器在grad_clipping内被第一次遍历后耗尽
            grad_clipping(list(model.parameters()), clipping_theta, device)
            optimizer.step()
            l_sum += l.item() * y.shape[0]
            n += y.shape[0]

        try:
            perplexity = math.exp(l_sum / n)
        except OverflowError:
            perplexity = float('inf')
        if (epoch + 1) % pred_period == 0:
            print('epoch %d, perplexity %f, time %.2f sec' % (
                epoch + 1, perplexity, time.time() - start))
            for prefix in prefixes:
                print(' -', predict_rnn_pytorch(
                    prefix, pred_len, model, vocab_size, device, idx_to_char,
                    char_to_idx))

# 设置参数训练
num_epochs, batch_size, lr, clipping_theta = 250, 32, 1e-3, 1e-2  # 注意这里的学习率设置
pred_period, pred_len, prefixes = 50, 50, ['分开', '不分开']
train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                              corpus_indices, idx_to_char, char_to_idx,
                              num_epochs, num_steps, lr, clipping_theta,
                              batch_size, pred_period, pred_len, prefixes)

GRU门控单元

(图:GRU中重置门、更新门与候选隐藏状态的计算)

重置门和更新门的输入均为当前时间步的输入$X_t$与上一时间步的隐藏状态$H_{t-1}$,激活函数都是sigmoid函数,因此它们的输出值域是$(0, 1)$。重置门的输出$R_t$用于计算候选隐藏状态,更新门的输出$Z_t$则用于将候选隐藏状态与上一时间步的隐藏状态组合,得到当前时间步的隐藏状态。
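
按标准GRU的定义(记号与后文手动实现中的参数名对应),重置门和更新门的计算为:

$$R_t = \sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r), \qquad Z_t = \sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)$$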

其中,候选隐藏状态的计算为:

$$\tilde{H}_t = \tanh(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h)$$

其中$\odot$是按元素乘法。$R_t$的取值在$(0, 1)$之间,因此重置门的作用可以看作对之前的历史信息$H_{t-1}$进行筛选。

隐藏状态的计算为:

$$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t$$

可见更新门在候选隐藏状态与上一时间步的隐藏状态之间作取舍,它的作用可以看作决定保留多少之前时刻的状态。

这样,这两个门的设计使得:重置门有助于捕捉时间序列里的短期依赖关系,更新门有助于捕捉长期依赖关系。

手动实现

# 初始化参数
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))

    W_xz, W_hz, b_z = _three()  # 更新门参数
    W_xr, W_hr, b_r = _three()  # 重置门参数
    W_xh, W_hh, b_h = _three()  # 候选隐藏状态参数

    # 输出层参数
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q])

# 初始化隐藏状态
def init_gru_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), )

定义GRU计算

def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z)
        R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r)
        H_tilda = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(R * H, W_hh) + b_h)
        H = Z * H + (1 - Z) * H_tilda
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H,)

直接使用之前的训练函数训练

num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens,
                      vocab_size, device, corpus_indices, idx_to_char,
                      char_to_idx, False, num_epochs, num_steps, lr,
                      clipping_theta, batch_size, pred_period, pred_len,
                      prefixes)

torch函数实现

直接调用GRU层即可

lr = 1e-2  # 注意调整学习率
gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens)
model = RNNModel(gru_layer, vocab_size).to(device)
train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                              corpus_indices, idx_to_char, char_to_idx,
                              num_epochs, num_steps, lr, clipping_theta,
                              batch_size, pred_period, pred_len, prefixes)

长短期记忆

长短期记忆(long short-term memory,LSTM)中包含三个门,输入门(input gate)、遗忘门(forget gate)和输出门(output gate),以及与隐藏状态形状相同的记忆细胞。

(图:LSTM中输入门、遗忘门、输出门、候选记忆细胞与记忆细胞的计算,原图 1595067353701.png)

从上图可以清晰地看出LSTM单元的计算过程:三个门控单元都由上一时间步的隐藏状态和当前时间步的输入经sigmoid函数得到,候选记忆细胞与它们的区别只是激活函数换成了tanh。然后由这四个单元计算当前时间步的记忆细胞$C_t$和隐藏状态$H_t$。
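
按标准LSTM的定义(记号与下文手动实现中的参数名对应),各单元的计算为:

$$I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i), \quad F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f), \quad O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)$$

$$\tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c), \qquad C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t, \qquad H_t = O_t \odot \tanh(C_t)$$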

手动实现

  • 获取参数
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))

    W_xi, W_hi, b_i = _three()  # 输入门参数
    W_xf, W_hf, b_f = _three()  # 遗忘门参数
    W_xo, W_ho, b_o = _three()  # 输出门参数
    W_xc, W_hc, b_c = _three()  # 候选记忆细胞参数

    # 输出层参数
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q])
  • 初始化变量

与普通RNN相比,LSTM的状态中额外包含一个形状与隐藏状态相同的记忆细胞

def init_lstm_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device),
            torch.zeros((batch_size, num_hiddens), device=device))
  • LSTM计算

其中记忆细胞$C_t$只在时间步之间传递,不流向输出层;输出层只根据隐藏状态$H_t$计算。

def lstm(inputs, state, params):
    [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params
    (H, C) = state
    outputs = []
    for X in inputs:
        I = torch.sigmoid(torch.matmul(X, W_xi) + torch.matmul(H, W_hi) + b_i)
        F = torch.sigmoid(torch.matmul(X, W_xf) + torch.matmul(H, W_hf) + b_f)
        O = torch.sigmoid(torch.matmul(X, W_xo) + torch.matmul(H, W_ho) + b_o)
        C_tilda = torch.tanh(torch.matmul(X, W_xc) + torch.matmul(H, W_hc) + b_c)
        C = F * C + I * C_tilda
        H = O * C.tanh()
        Y = torch.matmul(H, W_hq) + b_q  # 输出层
        outputs.append(Y)
    return outputs, (H, C)
  • 训练
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

train_and_predict_rnn(lstm, get_params, init_lstm_state, num_hiddens,
                      vocab_size, device, corpus_indices, idx_to_char,
                      char_to_idx, False, num_epochs, num_steps, lr,
                      clipping_theta, batch_size, pred_period, pred_len,
                      prefixes)

torch函数实现

调用即可

lr = 1e-2  # 注意调整学习率
lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)
model = RNNModel(lstm_layer, vocab_size).to(device)
train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                              corpus_indices, idx_to_char, char_to_idx,
                              num_epochs, num_steps, lr, clipping_theta,
                              batch_size, pred_period, pred_len, prefixes)